
vflush: Speed up reclaim by doing less in the loop #328

Merged

Conversation

EchterAgo

This removes dropping the `vnode_all_list_lock` inside the loop, as that is not needed. It also only enters the `v_mutex` of vnodes that are not yet `VNODE_DEAD`.

It moves part of the loop body into a new function, `flush_file_objects`, to make it more readable.

It also removes the restart of the loop, which is safe because `vnode_all_list_lock` is never unlocked.
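For readers who want to picture the new shape of the loop, here is a minimal sketch using the OpenZFS SPL list and mutex primitives. The struct layout, the globals, and the empty `flush_file_objects()` body are simplified stand-ins rather than the actual driver code:

```c
#include <sys/types.h>
#include <sys/list.h>	/* OpenZFS SPL list_t / list_node_t */
#include <sys/mutex.h>	/* OpenZFS SPL kmutex_t */

/* Simplified stand-in for the driver's vnode; only the fields the loop needs. */
struct vnode {
	list_node_t	v_list;		/* linkage on vnode_all_list */
	kmutex_t	v_mutex;
	uint32_t	v_flags;	/* VNODE_DEAD, ... */
};

static list_t vnode_all_list;		/* all vnodes; a global in the real code */
static kmutex_t vnode_all_list_lock;	/* protects vnode_all_list */

/* Hypothetical helper: tear down FILE_OBJECT state still referencing vp. */
static void
flush_file_objects(struct vnode *vp)
{
}

static void
vflush_reclaim_pass(void)
{
	struct vnode *vp;

	/*
	 * The list lock is held for the whole pass and never dropped,
	 * so the walk never has to restart from the head of the list.
	 */
	mutex_enter(&vnode_all_list_lock);
	for (vp = list_head(&vnode_all_list); vp != NULL;
	    vp = list_next(&vnode_all_list, vp)) {

		/* Already reclaimed: nothing left to flush, skip v_mutex. */
		if (vp->v_flags & VNODE_DEAD)
			continue;

		mutex_enter(&vp->v_mutex);
		flush_file_objects(vp);
		mutex_exit(&vp->v_mutex);
	}
	mutex_exit(&vnode_all_list_lock);
}
```

Because the list lock is held for the entire pass, no other thread can modify the list while it is being walked, which is what makes dropping the restart logic safe.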

@EchterAgo force-pushed the speed_up_reclaim branch 3 times, most recently from 08a62a8 to 920fc60 on November 10, 2023 03:51
@EchterAgo force-pushed the speed_up_reclaim branch 2 times, most recently from 94aa8e8 to d3596bf on November 10, 2023 05:35
@EchterAgo
Author

@lundman do you want #323? If not, I'll push this without it.

spl-time: Use KeQueryPerformanceCounter instead of KeQueryTickCount

`KeQueryTickCount` seems to only have a 15.625ms resolution unless the
interrupt timer frequency is increased, which should be avoided due to
power usage.

Instead, this switches the `zfs_lbolt`, `gethrtime` and
`random_get_bytes` to use `KeQueryPerformanceCounter`.

On my system this gives a 100ns resolution.

Signed-off-by: Axel Gembe <[email protected]>
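As a rough illustration of the approach (not the exact `spl-time.c` change), `gethrtime` can be built on `KeQueryPerformanceCounter` along these lines; the split multiply/divide keeps the tick-to-nanosecond conversion from overflowing 64 bits on long uptimes:

```c
#include <ntddk.h>

typedef long long hrtime_t;		/* int64 nanoseconds, as in the SPL */
#define	NANOSEC	1000000000ULL

hrtime_t
gethrtime(void)
{
	LARGE_INTEGER freq;
	LARGE_INTEGER now = KeQueryPerformanceCounter(&freq);
	ULONGLONG ticks = (ULONGLONG)now.QuadPart;
	ULONGLONG hz = (ULONGLONG)freq.QuadPart;

	/*
	 * Convert counter ticks to nanoseconds.  Splitting the whole
	 * seconds from the remainder avoids computing ticks * NANOSEC
	 * directly, which would overflow 64 bits.
	 */
	return ((hrtime_t)((ticks / hz) * NANOSEC +
	    (ticks % hz) * NANOSEC / hz));
}
```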
spl-time: Add assertion to gethrtime and cache NANOSEC / freq division

One less division for each call.

Signed-off-by: Axel Gembe <[email protected]>
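A sketch of what caching the division might look like; the names `hrtime_ns_per_tick` and `spl_time_init` are hypothetical, and the real assertion may check something different:

```c
/* NANOSEC, hrtime_t and ASSERT3U are taken from the SPL headers. */
static ULONGLONG hrtime_ns_per_tick;	/* e.g. 100 for a 10 MHz counter */

void
spl_time_init(void)
{
	LARGE_INTEGER freq;

	(void) KeQueryPerformanceCounter(&freq);
	/* The cached ratio is only exact if the frequency divides NANOSEC. */
	ASSERT3U(NANOSEC % (ULONGLONG)freq.QuadPart, ==, 0);
	hrtime_ns_per_tick = NANOSEC / (ULONGLONG)freq.QuadPart;
}

hrtime_t
gethrtime(void)
{
	LARGE_INTEGER now = KeQueryPerformanceCounter(NULL);

	ASSERT3U(hrtime_ns_per_tick, !=, 0);	/* spl_time_init() must run first */
	return ((hrtime_t)now.QuadPart * hrtime_ns_per_tick);
}
```

On current Windows systems the performance counter commonly runs at 10 MHz, so the cached ratio is exactly 100 ns per tick, matching the resolution mentioned above.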
vflush: Print reclaim statistics

This shows how many reclaims have been processed in thousand increments
and also how many reclaims are processed per second.

Signed-off-by: Axel Gembe <[email protected]>
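For illustration, the kind of progress reporting described above could look roughly like this; `vflush_note_reclaim` and the counter names are hypothetical, and `dprintf` is the SPL debug print:

```c
static ULONGLONG reclaim_count;
static hrtime_t reclaim_last_report;	/* set to gethrtime() when the flush starts */

static void
vflush_note_reclaim(void)
{
	reclaim_count++;
	if ((reclaim_count % 1000) == 0) {
		hrtime_t now = gethrtime();
		hrtime_t elapsed = now - reclaim_last_report;
		/* Rate for the last 1000 reclaims, in reclaims per second. */
		ULONGLONG per_sec = (elapsed > 0) ?
		    (1000ULL * NANOSEC / (ULONGLONG)elapsed) : 0;

		dprintf("vflush: %llu reclaims processed, ~%llu reclaims/s\n",
		    (unsigned long long)reclaim_count,
		    (unsigned long long)per_sec);
		reclaim_last_report = now;
	}
}
```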
@lundman

lundman commented Nov 10, 2023

Who doesn't like statistics - apart from every human - but sometimes it can be useful, so leave it in there

vflush: Speed up reclaim by doing less in the loop

This removes leaving the `vnode_all_list_lock` in the loop as that is
not needed. It also only enters the `v_mutex` of nodes that are not
`VNODE_DEAD` yet.

This converts part of the loop to a new function called
`flush_file_objects` to make it more readable.

This also removes the restart of the loop, which is safe because
`vnode_all_list_lock` is never unlocked.

Signed-off-by: Axel Gembe <[email protected]>
@lundman lundman merged commit aa83204 into openzfsonwindows:zfs-Windows-2.2.0-release Nov 10, 2023
7 checks passed
@EchterAgo deleted the speed_up_reclaim branch on November 10, 2023 05:39
lundman pushed a commit that referenced this pull request Dec 11, 2023
* spl-time: Use KeQueryPerformanceCounter instead of KeQueryTickCount

`KeQueryTickCount` seems to only have a 15.625ms resolution unless the
interrupt timer frequency is increased, which should be avoided due to
power usage.

Instead, this switches the `zfs_lbolt`, `gethrtime` and
`random_get_bytes` to use `KeQueryPerformanceCounter`.

On my system this gives a 100ns resolution.

Signed-off-by: Axel Gembe <[email protected]>

* spl-time: Add assertion to gethrtime and cache NANOSEC / freq division

One less division for each call.

Signed-off-by: Axel Gembe <[email protected]>

* vflush: Print reclaim statistics

This shows how many reclaims have been processed in thousand increments
and also how many reclaims are processed per second.

Signed-off-by: Axel Gembe <[email protected]>

* vflush: Speed up reclaim by doing less in the loop

This removes leaving the `vnode_all_list_lock` in the loop as that is
not needed. It also only enters the `v_mutex` of nodes that are not
`VNODE_DEAD` yet.

This converts part of the loop to a new function called
`flush_file_objects` to make it more readable.

This also removes the restart of the loop, which is safe because
`vnode_all_list_lock` is never unlocked.

Signed-off-by: Axel Gembe <[email protected]>

---------

Signed-off-by: Axel Gembe <[email protected]>